Why Human Intuition Is Still Science’s Greatest Tool In The Age Of AI

Our sense of aesthetics, meaning and embodiment gives us a vital advantage over our technological creations.

Muhammad Fatchurofi for Noema Magazine

Conor Feehly is a science writer from New Zealand whose work explores the intersection of physics, biology and philosophy of mind.

Nestled in a valley on the industrial outskirts of Wellington, New Zealand, a nuclear fusion startup called OpenStar Technologies is at work on a radical mission. It is one of roughly 50 companies worldwide racing to design a nuclear fusion reactor capable of supplying carbon-free electricity to the grid at a commercial scale. Although fusion promises to be a clean, safe alternative to current nuclear power — and is widely considered the final hurdle on the path to limitless energy — it has so far proven to be a nut nobody can crack.

OpenStar’s founder and CEO, Ratu Mataira, has staked his bet on a levitated dipole fusion reactor design, in which an internal levitating magnet aims to confine swirling deuterium plasma superheated to temperatures that rival those at the center of our home star. A young Māori scientist and entrepreneur, Mataira is a modern embodiment of Māui, a character from Māori mythology who ambushed and ensnared the sun with woven flax ropes, slowing its journey through the sky to create more daylight hours. Like Māui, Mataira dreams of capturing the power of the sun.

It is no simple feat: For the better part of a century, physicists have understood most of what happens when you smash subatomic particles together at extremely high temperatures and pressures, but the large-scale behavior of plasmas under various conditions is a frontier that researchers are only beginning to map out.

Here, at the precipice of human knowledge, something happens that you might not expect. Intuition injects itself into the practice of science. 

“Once you’re actually working at the cutting edge of the unknown, intuition has to play a part, because there isn’t an instruction manual,” Mataira explains. “Nobody has written down the perfect rational answer for you yet.”

Mataira and his team have drawn on careful intuitive reasoning to guide certain experiments and develop explanations for how their reactor works, which he credits for some of OpenStar’s progress. 

Intuition isn’t something we typically associate with science. Where intuition can be fast, subjective and unreliable, science tends to be gradual, objective and based on experimental evidence and rationality. 

When we intuit, we have a sense of knowing something without necessarily understanding how we arrived at that conclusion. In science, if you can’t tell a coherent story based on evidence and a lineage of ideas, it’s not considered solid or reliable. Broadly speaking, we expect the practice of science to steer clear of intuition and other things that make us human, like our feelings and biases, to build stronger foundations for our knowledge.

And yet, when we apply a more nuanced lens to how science is actually done, these sharp boundaries begin to dissolve.

Physicists, mathematicians and other scientists across the disciplinary spectrum spend many of their waking hours inhabiting abstract, theoretical worlds. In our everyday lives, we typically don’t deal with superheated forms of matter found in stars or contemplate the weird behavior of subatomic particles as described by quantum mechanics. Scientists, however, are able to cultivate their intuitions when it comes to these truly alien domains — to develop a feel for phenomena that evolution and our development arguably never prepared us for.

Contrary to popular belief, human intuition is one of science’s most valuable tools. It has been a spark behind some of our greatest scientific breakthroughs. At age 16, Albert Einstein imagined chasing after a beam of light, which he would later say informed his theory of special relativity. Friedrich August Kekulé, who discovered the benzene ring structure, described following a hunch that came to him in a dream.

Now, at a time when we are increasingly outsourcing scientific methods to machine learning and artificially intelligent tools, it is more important than ever to recognize the value of human intuition in science.

What Is Intuition?

Psychologists interested in human reasoning, such as Daniel Kahneman, have traditionally separated intuition from rationality. The former, which Kahneman classified as “System 1 Thinking,” is described as fast and emotional, while the latter, “System 2 Thinking,” is slower, deliberative and logical.

You might have heard variations of this delineation: implicit versus explicit, heuristic versus systematic, automatic versus controlled. The point is that intuitive reasoning seems to come from somewhere that is unconscious, embodied and aesthetically driven, which sits in contrast to conscious, analytical, step-by-step reasoning. 

For decades, researchers in the cognitive sciences have been debating whether what we find intuitive stems from hardwired knowledge we inherit at birth, or whether we build up our intuitions as we interact with the world. 

“At the precipice of human knowledge, something happens that you might not expect: Intuition injects itself into the practice of science.”

As developmental psychologist Alison Gopnik describes in her book, “The Scientist in the Crib,” infants do experiments through play to investigate the causal structure of the world. She argues that through this type of early probing and intervening, we develop an “intuitive physics” for how objects and events relate to one another in our three-dimensional, temporal reality.

We learn to expect objects to move away from us when we push them, to remain intact when we look away and to bounce off one another when they come into contact. Our capacity to build this intuitive physical model deeply depends on our ability to interface with the world through our bodies.

But what about the things we can’t meaningfully interact with in everyday life — when we probe down into the nether regions of matter itself, or when we try to conceive of deep time at cosmic scales? 

“We don’t have very good intuitions about things like orders of magnitude,” says Melanie Mitchell, a cognitive scientist who works at the boundary of human reasoning and artificial intelligence at the Santa Fe Institute. As evidenced in the strangeness of quantum mechanics or in the abstract spaces of theoretical mathematics, our gut feelings about how the world works often break down at the frontiers of human understanding. What use are our intuitions then? 

In 1980, psychologists George Lakoff and Mark Johnson published “Metaphors We Live By,” a now highly influential book that argues people use metaphor and analogy from their direct physical and social experiences as tools to scaffold their understanding into more abstract domains.

Take, as an example, the sentence “I came closer to understanding.” At first glance, it seems straightforward. But the word “closer” — a metaphorical reference to spatial proximity — maps directly onto firsthand embodied experience. Here, it denotes an increase in “understanding,” an abstract concept relating to knowledge, which itself has no spatial parameters. We use the embodied sensation of moving through space to meaningfully relate to something conceptual.

“This is a big part of how we understand the world and how we reason about it, so I don’t think ‘reasoning’ is separate from that,” explains Mitchell. “Scientists use metaphors and analogies more than they probably realize, so it’s not necessarily this clean, cold, deductive process that people picture scientists doing.”

Mataira does this often to help non-experts, including investors and members of the public, intuitively understand the complex internal workings of his fusion reactor. He uses an analogy of something people are more familiar with: the weather that gives rise to fog.  

In typical weather, you often have wind, a kind of turbulence that works to return the system to some state of energetic equilibrium. A change in temperature, like at sunrise, reshapes the weather system: As the ground heats up, it warms the air above it, creating convective currents of warm air rising and cold air rushing in to fill the space left behind.

Sometimes, though, the ground cools off so effectively at night that it also greatly cools the air just above it, so the temperature ends up being cooler at ground level and warmer at greater heights. Above a certain height, the air temperature drops off again. That inversion layer creates stagnant fog, since there is no thermodynamic reason for the gases in the atmosphere to circulate as they normally would.

Much like the layered air beneath a fog-forming inversion, inside Mataira’s fusion machine there are two distinct regions of plasma due to the presence of the levitating magnet: the plasma closest to the magnet and, further out, an inverted region of plasma where the pressure drops off. These two regions behave in very different ways. One is calm, like the air just above ground level during foggy conditions, and the other is very turbulent.

“That [weather] scenario is just like the extra region that we have in our system. All the same arguments, all the same logic, but one is a highfalutin fusion reactor, and the other is fog that would freeze out a cherry orchard,” says Mataira, who also uses this analogy to help guide his own reasoning about how to conduct experiments.

“You can do a lot of this reasoning by analogy … it’s got all the technical jargon around it because people are trying to understand it in scientific papers, but it is really just another manifestation of something we are already familiar with from everyday experience.”

Science, Intuition And AI

Much like the child who learns about the world through play, developing intuitive physics as she navigates her surroundings, scientists make discoveries by exploring and investigating the intricacies of their fields. 

“Our gut feelings about how the world works often break down at the frontiers of human understanding.”

In popular media, we sometimes hear about the “counterintuitive” behavior of phenomena at the quantum mechanical level. One example is tunnelling, which occurs when a subatomic particle passes through an otherwise impenetrable barrier. Our everyday intuition, based on the intuitive physics we developed through childhood, would typically rule out this type of event. When we throw a tennis ball at a wall, we expect it to bounce back, after all. However, because the location of a subatomic particle is described by a field of probability rather than a definite point, if a barrier sits within that field, there is a small but real chance the particle will turn up on the far side of it.

“When you learn classical physics, it’s relatively easy to see what’s going on. You drop something and it falls. With quantum behavior, the hard bit is that there is nothing here that I have on my desk that I could show you and say, ‘Ok, here’s how we can demonstrate quantum mechanics,’” says Carrie Weidner, a lecturer in quantum engineering at the University of Bristol in the U.K. 

“The fact that I can sit here and talk about what is happening to an atom like, ‘Well, it is probably tunnelling,’ like that statement is so anathema to everything I was exposed to for the first 18, 20 years of my life. It’s akin to saying, ‘Here, hand me that grand piano,’” says Weidner. 

However, early in her career Weidner spent three years working with Jacob Sherson at Denmark’s Aarhus University developing “quantum games” with rules based on quantum mechanical phenomena like tunnelling. The goal was “the demystification of quantum mechanics,” she says. 

“If you can hand someone a game and say, ‘Play with it,’ whether it’s a virtual lab or quantum chess, people will learn how things work. By playing it you build up that intuition.”

Our ability to develop a feel for phenomena that defy our ordinary understanding of the world raises a fundamental question about the flexibility of the human mind. How strange or seemingly illogical can things get before our intuitive houses of cards collapse and reality becomes completely unintelligible?

“That is one of the questions I find most compelling … Can we move outside of these attractors set up by evolution and cognitive development by the act of thinking itself?” asks Ruairidh Battleday, a cognitive scientist who works at Harvard’s Center for Brain Science and MIT’s Center for Brains, Minds, and Machines. “My intuition is that we can.”

Our intuitions are strongly constrained by our experiences and environments. The more time we spend in a given world — like, say, playing a quantum game — the more intuitive the rules of that world will become. This is why highly complex or seemingly counterintuitive concepts can become second nature to scientists like Weidner who immerse themselves in their fields. 

“Mathematicians, for example, are able to get into some really different cognitive spaces,” says Battleday. “I used to have dinner with this PhD in mathematics and he genuinely said he was getting up to visualizing in four dimensions.”

It’s possible that we are hardwired to interface with the world in only three spatial dimensions and a temporal one — the parameters within which our intuitive physics exists. Or, with enough exposure, we may be capable of leaving those confines behind to develop an intuition in realms that are radically different. While we tend to assume that evolution baked into the human cognitive system a set of assumptions for how the world is configured, it’s possible that it instead baked in adaptability and flexibility: a mind capable of representing any reality it happens to find itself in.

Intuition plays a critical role in the genesis of a scientific research program, Battleday says. A scientist might have an intuition for a new way of looking at things in their area of expertise, or conversely, they might have an intuition that an existing standard is mistaken. 

“Something that great scientists do — somehow — is preserve the raw motivation and energy right the way through getting the initial shape of an idea to the final published paper,” says Battleday. “I think the story of how that shape unfolds often gets lost in scientific communication.” 

This human drive to understand the world around us, to ask better and better questions that probe the unknown, is what sets us apart from artificially intelligent systems that are increasingly deployed into the practice of science. AI systems can now match human abilities in terms of brute calculation and even statistical inference. They often excel in formal domains like mathematics and logic. 

“The best science is done when intuition is applied in collaboration with analytical, critical and skeptical thought.”

And yet, “They can’t ask the right questions,” Mitchell notes. “That’s a big, important thing in science; they don’t exactly know where to go next.” 

She suspects this is because large language models (LLMs) are unable to intervene and conduct experiments, things that require a physical body. There is a passivity to the way they learn: You give them data and they process it with an algorithm, in contrast to the more active, interventionist modes of human learning. We learn by doing, not receiving. 

While artificially intelligent systems are helping scientists identify new correlations or anomalies in nature, they still need to be prompted to search for those phenomena in the first place. They also ultimately need the human scientist to provide a novel explanation for what they find — to suggest mechanisms that make sense of the correlation or anomaly. 

We can close our eyes and imagine the motion of the planets, the warped spacetime around a supermassive black hole or a fluctuating soup of subatomic particles. It is this aesthetic familiarity that allows us to generalize, form laws, postulate mechanisms and come to genuine understanding, rather than just correlation or prediction. 

Like Charon from Greek mythology, the ferryman who guides the deceased across the River Styx into the unknown of the underworld, intuition guides scientists when they voyage into unmapped waters. Without the cultivated, meaning-laden, embodied intuition of the human scientist who is instinctively driven to ask questions about the nature of the world, AI systems remain rudderless when they enter the realms of uncertainty, the spaces that science tries to penetrate.

Checking Our Assumptions

The common perception of science as a cold, methodical practice is based in part on outdated caricatures of who scientists are and what scientists do. In popular media they have traditionally been portrayed as antisocial, neurotic, lab-coat-wearing individuals who tinker away in labs and obsess over small details.

“I think very few people aspire to be a scientist because they think that it can be a way of eliminating all of the uncertainty and creativity out of their life. People are definitely attracted to the fact that the rational tools give you concrete answers; they find that very satisfying,” says Mataira.

“But anybody who wants to stay in science and tries to get into research science, if they have come into it with a mentality of trying to avoid the intuitive and creative, they will fail. I can’t think of any example where avoiding those things made someone more successful.”

Of course, that’s not to say we can just follow our intuitions and everything will fall neatly into place. The best science is done when intuition is applied in collaboration with analytical, critical and skeptical thought, even though rationality and intuition are often framed in opposition. 

History is filled with examples of times when science has revealed our intuitions to be mistaken. Earth is not the center of the universe. Particles can behave in non-causal ways. Time doesn’t flow in a unitary, universal fashion. 

The cultivated intuition of scientists allows them to ask better questions, but they have to check them, to scrutinize them. That, in part, is what separates science from other ways of knowing. 

As a drummer and music maker, I like to think about the relationship between science and intuition as similar to the process of learning to play an instrument. When starting out, you have to apply your conscious mind to the finer details of how you are using your body; you have to focus on how you are hitting the drum, your technique. When learning a new scientific discipline, you also have to consciously learn the concepts, understand the mathematics and engage with the canon. 

But as you learn, you are slowly able to essentially automate stuff that you previously had to really concentrate on, at which point it becomes instinctual and a skill. Once you have raised yourself to this higher level, you become a drummer, and you can play this meshing game with other people and instruments. You’re now ready to jam, to make music with others — an intuitive, creative process. Or if you’re a scientist, you’re ready to suggest new experiments or propose new concepts or mechanisms that might extend the explanatory reach of your field — “jamming” in the scientific sense. 

“The cultivated intuition of the scientist remains the creative spark that drives science forward.”

One of the things Mataira has enjoyed the most about AI is not its capabilities, but how it reveals the things that are truly special about humans versus the things we assumed were special. We generally consider our linguistic ability and reasoning to be uniquely human, for instance. So much so that if something can mimic that ability (say, by passing a Turing Test), we might say that that thing possesses human-level intelligence. Instead, AI’s ability to mimic human linguistic ability shows that language is just another tool in the toolkit, not necessarily the thing that makes us who we are. 

“When we make these leaps of intuition and we try to scaffold this kind of reasoning, we know what we are doing. We know we are scaffolding. We know we are grasping at something that’s real, and one day, we hope, we might actually touch it and validate that. But I don’t think the AIs know that they are only grasping,” says Mataira. 

An important distinction between what an LLM does (string together words in a cogent way) versus what a scientist does (ask questions that grasp at the nature of reality) is that the LLM can’t tell the difference.    

“A lot of this is related to what we call metacognition,” explains Mitchell. “A big part of our thinking is an awareness of our own thinking, about our own certainty, our own state of knowledge, how confident we are in what we are doing, how much we like an idea that we have thought about, and this is something people have said over and over that machines really need.” 

While an LLM will say everything with equal confidence, it doesn’t have a sense of its own mental processes. It just has probabilities of what should be the next token or symbol, with some algorithmic parameters built around that. Human metacognition, however, plays a crucial role in how we come to the next question or idea because we know where our own shortcomings are, and so our scientific questions and intuitions are directed toward those gaps. 

The cultivated intuition of the scientist remains the creative spark that drives science forward. When we paint science as this cold, methodical, purely rational process, we neglect the fact that science is still ultimately done by people. 

Yes, we bring our baggage to the process, our biases, our agendas. But we also bring our intuition, meta-level self-awareness and creativity, the crucial things that AIs still lack. As we increasingly outsource facets of the scientific process to our technological creations, we must remember the value that we bring to science as people.